Partition Information
To view information about the available nodes and partitions, use the following command:

```bash
sinfo
```

For more detailed information about a specific partition:

```bash
scontrol show partition <partition-name>
```
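For example, querying the 48cpu_192mem partition from the table below produces output like the following (abridged and illustrative; the exact layout and values come from the live system, not this page):

```bash
$ scontrol show partition 48cpu_192mem
PartitionName=48cpu_192mem
   ...
   MaxTime=7-00:00:00 MinNodes=0
   State=UP TotalCPUs=1344 TotalNodes=28
   ...
```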
HPC-Elja: Available Partitions / Compute Nodes
In total, the Elja cluster provides 6,016 cores and 22,272 GB of memory (21,888 GB usable).
| Count | Partition Name | Cores/Node (sockets x cores) | Memory/Node, GB (usable) | Features |
|---|---|---|---|---|
| 28 | 48cpu_192mem | 48 (2x24) | 192 (188) | Intel Xeon Gold 6248R |
| 55 | 64cpu_256mem | 64 (2x32) | 256 (252) | Intel Xeon Platinum 8358 |
| 4 | 128cpu_256mem | 128 (2x64) | 256 (252) | AMD EPYC 7713 |
| 3 | gpu-1xA100 | 64 (2x32) | 192 (188) | 1x NVIDIA A100 GPU |
| 5 | gpu-2xA100 | 64 (2x32) | 192 (188) | 2x NVIDIA A100 GPUs |
| 1 | gpu-8xA100 | 128 (2x64) | 1000 (996) | 8x NVIDIA A100 GPUs |
HPC-Elja: Job Limits
Each partition has a maximum time limit of seven (7) days. In addition, the any_cpu, long, and short queues are provided (see the example batch script after this list):
- any_cpu: all CPU nodes, two (2) day time limit
- 48cpu_192mem: CPU nodes with 48 cores and 192 GB of memory, seven (7) day time limit
- 64cpu_256mem: CPU nodes with 64 cores and 256 GB of memory, seven (7) day time limit
- 128cpu_256mem: CPU nodes with 128 cores and 256 GB of memory, seven (7) day time limit
- long: ten 48cpu and ten 64cpu nodes, fourteen (14) day time limit
- short: four 48cpu nodes, two (2) day time limit
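As a minimal sketch of how these queues are used, the batch script below submits to the any_cpu queue; the job name, task count, and executable are placeholders, not site-mandated values:

```bash
#!/bin/bash
#SBATCH --job-name=example       # placeholder job name
#SBATCH --partition=any_cpu      # any CPU node; two (2) day maximum
#SBATCH --time=1-00:00:00        # request 1 day, within the queue's limit
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=48     # fits on any of the CPU node types above

srun ./my_program                # placeholder executable
```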
SLURM Configuration
SLURM is configured such that 3.94 GB of memory is allocated per core.
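A consequence (sketched below with illustrative numbers) is that a job's memory allocation follows from its core count; a different per-core amount can also be requested explicitly with --mem-per-cpu:

```bash
#!/bin/bash
#SBATCH --partition=any_cpu
#SBATCH --ntasks=16              # 16 cores x 3.94 GB/core = ~63 GB allocated
##SBATCH --mem-per-cpu=7900      # optional override, in MB (illustrative value)

srun ./my_program                # placeholder executable
```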
Available Memory
On each node, 2-4 GB of RAM is reserved for the operating system image, so the memory actually available to jobs is the value given in parentheses; for example, a 48cpu_192mem node has 192 GB installed but 188 GB usable.